    Comparative Study of OpenCV Inpainting Algorithms

    Get PDF
    Digital image processing has been a significant part of computing science since its inception. It comprises the methods and techniques used to manipulate a digital image with a digital computer. It is a type of signal processing in which the input and output may be an image or the features/characteristics associated with that image. In this age of advanced technology, digital image processing has manifold uses, major fields being image restoration, medicine, computer vision, color processing, pattern recognition, and video processing. Image inpainting is one such important domain of image processing: a form of image restoration and conservation. This paper presents a comparative study of the digital inpainting algorithms provided by OpenCV (a popular image processing library) and identifies the most effective inpainting algorithm on the basis of Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and runtime metrics.
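    The study above ranks the inpainting algorithms partly by PSNR. As a minimal illustration of that metric (a plain-Python sketch of the standard formula, not the paper's own evaluation code), PSNR is derived from the mean squared error between the original and restored images:

    ```python
    import math

    def psnr(original, restored, max_val=255.0):
        """Peak Signal-to-Noise Ratio between two equally sized images,
        given here as flat sequences of pixel intensities."""
        mse = sum((a - b) ** 2 for a, b in zip(original, restored)) / len(original)
        if mse == 0:
            return float("inf")  # identical images: no noise at all
        return 10 * math.log10(max_val ** 2 / mse)
    ```

    A higher PSNR indicates a restoration closer to the ground-truth image; in practice, library routines such as those in scikit-image compute the same quantity over whole image arrays.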

    Correcting Distortions in Images Using the Telea and Navier-Stokes Algorithms

    Get PDF
    Correcting distortions in images, or removing or altering unwanted parts of an image so that viewers unfamiliar with the original cannot perceive the change, are operations people have demanded for a very long time. Using computers for these operations has both improved their quality and made them easier, yet even in a digital environment the editing is still performed manually. With image inpainting, this process has become both faster and automated. Image inpainting can be performed with the inpaint_telea and inpaint_ns classes developed for the OpenCV library.
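    The two OpenCV routines named above are exposed through `cv2.inpaint` with the `INPAINT_TELEA` and `INPAINT_NS` flags. A minimal sketch on a synthetic image follows; the image, mask, and radius here are illustrative choices, not values from the paper:

    ```python
    import numpy as np
    import cv2  # OpenCV

    # Synthetic 64x64 3-channel image with a simple two-tone pattern.
    img = np.full((64, 64, 3), 128, dtype=np.uint8)
    img[:, :32] = 200

    # 8-bit single-channel mask: non-zero pixels mark the damaged
    # region that the algorithm must reconstruct.
    mask = np.zeros((64, 64), dtype=np.uint8)
    mask[24:40, 24:40] = 255

    # Telea (Fast Marching) and Navier-Stokes inpainting, radius 3 px.
    restored_telea = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
    restored_ns = cv2.inpaint(img, mask, 3, cv2.INPAINT_NS)
    ```

    Both calls return a new image of the same shape and dtype as the input, with the masked pixels filled in from the surrounding neighborhood.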

    A Comparison on Features Efficiency in Automatic Reconstruction of Archeological Broken Objects

    Get PDF
    Automatic reconstruction of broken archeological objects is an invaluable tool for restoration work and personnel. We assume that broken pieces have similar characteristics along their common boundaries when they are correctly combined. We work within a framework for full reconstruction of the original objects using texture and surface-design information on the sherds. The texture of a band outside the border of each piece is predicted by inpainting and texture-synthesis methods, and feature values are derived from the original and predicted images of the pieces. We present a quantitative and qualitative comparison over a large set of features and over a large set of synthetic and real broken archeological objects.

    STEFANN: Scene Text Editor using Font Adaptive Neural Network

    Full text link
    Textual information in a captured scene plays an important role in scene interpretation and decision making. Though there exist methods that can successfully detect and interpret complex text regions present in a scene, to the best of our knowledge there is no significant prior work that aims to modify the textual information in an image. The ability to edit text directly on images has several advantages, including error correction, text restoration, and image reusability. In this paper, we propose a method to modify text in an image at the character level. We approach the problem in two stages. First, the unobserved character (target) is generated from an observed character (source) being modified. We propose two different neural network architectures: (a) FANnet, to achieve structural consistency with the source font, and (b) Colornet, to preserve the source color. Next, we replace the source character with the generated character, maintaining both geometric and visual consistency with neighboring characters. Our method works as a unified platform for modifying text in images. We present the effectiveness of our method on the COCO-Text and ICDAR datasets both qualitatively and quantitatively.
    Comment: Accepted in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 202