
    cGAN-based Manga Colorization Using a Single Training Image

    The Japanese comic format known as manga is popular all over the world. It is traditionally produced in black and white, and colorization is time consuming and costly. Automatic colorization methods generally rely on greyscale values, which are not present in manga. Furthermore, due to copyright protection, colorized manga available for training is scarce. We propose a manga colorization method based on conditional Generative Adversarial Networks (cGANs). Unlike previous cGAN approaches that use many hundreds or thousands of training images, our method requires only a single colorized reference image for training, avoiding the need for a large dataset. Colorizing manga with cGANs can produce blurry results with artifacts, and the resolution is limited. We therefore also propose a segmentation and color-correction method to mitigate these issues. The final results are sharp, clear, and in high resolution, and stay true to the characters' original color scheme. Comment: 8 pages, 13 figures
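The color-correction step described above could, under strong simplifying assumptions, look like the following sketch: given a blurry cGAN output, a segmentation of the page, and a palette extracted from the single reference image, each segment is re-painted with the nearest palette color. The function name and interfaces are hypothetical, not the paper's actual implementation.

```python
import numpy as np

def palette_color_correction(colorized, labels, palette):
    """Snap each segment's mean color to the nearest reference palette color.

    Hypothetical sketch of segment-wise color correction: cGAN outputs can be
    blurry, so each segmented region is flattened to the closest color from
    the reference image's palette. Segmentation and palette extraction are
    assumed to be done elsewhere.

    colorized : (H, W, 3) float array, raw cGAN output in [0, 1]
    labels    : (H, W) int array, one segment id per pixel
    palette   : (K, 3) float array, reference colors in [0, 1]
    """
    corrected = np.empty_like(colorized)
    for seg in np.unique(labels):
        mask = labels == seg
        mean_color = colorized[mask].mean(axis=0)       # segment's average color
        dists = np.linalg.norm(palette - mean_color, axis=1)
        corrected[mask] = palette[np.argmin(dists)]     # nearest palette color
    return corrected
```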

    Estimation of Scribble Placement for Painting Colorization

    Image colorization has been a topic of interest since the mid-1970s, and several algorithms have been proposed that, given a grayscale image and color scribbles (hints), produce a colorized image. Recently, this approach has been introduced in the field of art conservation and cultural heritage, where B&W photographs of paintings at earlier stages have been colorized. However, the questions of what the minimum number of scribbles is and where they should be placed in an image remain unexplored. Here we address this limitation using an iterative algorithm that provides insight into the relationship between locally and globally important scribbles. Given a color image, we randomly select scribbles and attempt to color the grayscale version of the original. We define a scribble contribution measure based on the reconstruction error. We demonstrate our approach using a widely used colorization algorithm and images from a Picasso painting and the peppers test image. We show that areas isolated by thick brushstrokes, or areas with high textural variation, are locally important but contribute very little to the overall representation accuracy. We also find that, for the Picasso painting, on average 10% scribble coverage is enough and that flat areas can be represented by few scribbles. The proposed method can be used verbatim to test any colorization algorithm.
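The contribution measure above can be sketched in a few lines: colorize once with and once without the candidate scribble, and score the scribble by how much it reduces the reconstruction error against the known color original. The colorization algorithm itself is assumed external; the function below is an illustrative reading of the abstract, not the authors' code.

```python
import numpy as np

def scribble_contribution(original, with_scribble, without_scribble):
    """Score one scribble by the drop in mean-squared reconstruction error.

    original         : (H, W, C) ground-truth color image
    with_scribble    : colorization result including the candidate scribble
    without_scribble : colorization result with that scribble removed
    Returns a positive value when the scribble improves the global result.
    """
    err_with = np.mean((original - with_scribble) ** 2)
    err_without = np.mean((original - without_scribble) ** 2)
    return err_without - err_with
```

Scribbles whose contribution stays near zero are the "locally important but globally negligible" ones the abstract describes, e.g. inside regions isolated by thick brushstrokes.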

    Example-based image colorization using locality consistent sparse representation

    Image colorization aims to produce a natural-looking color image from a given grayscale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target grayscale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions of the reference image, we further introduce a locality-promoting regularization term into the energy formulation, which substantially improves matching consistency and the subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for the chrominance channels, guided by the target grayscale image. To the best of our knowledge, this is the first work on sparse-pursuit image colorization from a single reference image. Experimental results demonstrate that our colorization method outperforms state-of-the-art methods, both visually and quantitatively via a user study.
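The dictionary-based reconstruction with a locality term can be illustrated as follows. Note the simplification: the paper performs sparse (L1) pursuit, whereas this sketch uses an L2 locality-weighted surrogate so the code stays closed-form; all names are hypothetical.

```python
import numpy as np

def locality_weighted_code(D, y, locality, lam=0.1):
    """Solve a locality-regularized least-squares code for one target superpixel.

    Illustrative surrogate for the paper's sparse pursuit:
        min_x ||y - D^T x||^2 + lam * sum_i locality_i * x_i^2
    D        : (K, F) feature vectors of the K reference superpixels (dictionary)
    y        : (F,) feature vector of the target superpixel
    locality : (K,) penalty per atom, large for reference superpixels whose
               features/positions are far from the target (locality term)
    """
    A = D @ D.T + lam * np.diag(locality)
    return np.linalg.solve(A, D @ y)

def transfer_chroma(code, ref_chroma, top=3):
    """Average the chrominance of the dominant reference superpixels."""
    idx = np.argsort(-np.abs(code))[:top]
    w = np.abs(code[idx])
    return (w[:, None] * ref_chroma[idx]).sum(axis=0) / w.sum()
```

Because atoms with large locality penalties are suppressed, spatially coherent target superpixels tend to draw chrominance from nearby reference regions, which is the matching-consistency effect the abstract describes.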

    Dynamic Weights Equations for Converting Grayscale Image to RGB Image

    Converting a color image from the RGB color system to grayscale is a simple operation using fixed conversion weights, but using the same fixed weights to restore color to the same images is not effective for all image types, because a grayscale image contains too little information to drive the conversion. The basic idea of this paper is to employ mathematical equations derived from the grayscale image itself in the conversion operation: the grayscale image is colorized using weights derived from its own characteristics. The skewness, mean, and standard deviation moments are extracted from the features of the grayscale image and used to determine the weights for the RGB color system. This method proved more successful at coloring images than the traditional approach, which relies on the same fixed weights to convert all types of grayscale images.
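The moment extraction above can be made concrete. The abstract does not give the actual weight equations, so the mapping from the three moments to three normalized RGB weights below is a hypothetical sketch of the idea: replace the fixed 0.299/0.587/0.114 luminance weights with statistics measured on the particular grayscale image.

```python
import numpy as np

def dynamic_rgb_weights(gray):
    """Derive per-image RGB weights from grayscale moments.

    Hypothetical mapping (the paper's exact equations are not given in the
    abstract): compute the mean, standard deviation, and skewness of the
    grayscale intensities, then normalize their magnitudes into three
    weights that sum to 1, one per RGB channel.
    """
    g = gray.astype(float).ravel()
    mean = g.mean()
    std = g.std()
    skew = np.mean(((g - mean) / (std + 1e-12)) ** 3)  # third standardized moment
    raw = np.abs(np.array([mean, std, skew])) + 1e-12  # image-dependent scores
    return raw / raw.sum()                             # normalize to sum to 1
```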

    Texture Analysis of Old Photographs Using Gray Level Co-occurrence Matrix Texture Features in Automatic Image Colorization

    Image processing is important in recognition, classification, segmentation, and other processes. One approach is to analyze the texture features of old photos, in this case grayscale photos. The research object is an old photo (image), analyzed with a statistical method based on the Gray Level Co-occurrence Matrix (GLCM). GLCM is one of the methods used for extracting texture features; here it is used to compare the GLCM texture features of the old photo with those of the original photo. Colorization provides richer visualization of an object, whether a monochrome image or video, with the aim of adding detail and clarity to the colored image or video. The study colorizes grayscale images and then computes GLCM texture feature values. The feature measurements obtained from this calculation are used to determine the error value, which indirectly indicates how similar the images are. Success is measured using the Mean Square Error (MSE) and Mean Absolute Error (MAE). Keywords: texture, GLCM, MAE, MSE
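A minimal sketch of the comparison pipeline above: build a normalized co-occurrence matrix for one pixel offset, then compare the matrices of two images with MSE and MAE. Quantization of intensities to `levels` gray levels is assumed to happen beforehand; this is an illustration, not the study's code.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one offset (dx, dy).

    image : 2-D int array already quantized to values in [0, levels).
    Entry m[i, j] is the relative frequency of a pixel with level i having
    a neighbor at offset (dx, dy) with level j.
    """
    m = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / max(m.sum(), 1)

def mse_mae(a, b):
    """MSE and MAE between two feature matrices, as used for the comparison."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return (d ** 2).mean(), np.abs(d).mean()
```

Identical textures give zero error for both measures; the larger the errors between the colorized photo's GLCM and the original's, the less similar the textures.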

    Colorization and Automated Segmentation of Human T2 MR Brain Images for Characterization of Soft Tissues

    Characterization of tissues such as the brain using magnetic resonance (MR) images, and colorization of the grayscale image, have been reported in the literature, along with their advantages and drawbacks. Here, we present two independent methods: (i) a novel colorization method to underscore the variability in brain MR images, indicative of the underlying physical density of biological tissue; (ii) a segmentation method (both hard and soft segmentation) to characterize gray brain MR images. The segmented images are then transformed into color using the above-mentioned colorization method, yielding promising results for manual tracing. Our color transformation incorporates voxel classification by matching the luminance of voxels in the source MR image and the provided color image, measuring the distance between them. The segmentation method is based on single-phase clustering for 2D and 3D image segmentation with a new automatic centroid selection method, which divides the image into three distinct regions (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)) using prior anatomical knowledge. Results have been successfully validated on human T2-weighted (T2) brain MR images. The proposed method can potentially be applied to grayscale images from other imaging modalities, bringing out additional diagnostic tissue information through the colorized image processing approach described.
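The luminance-matching transfer described above can be sketched as follows: each grayscale voxel takes the chrominance of the reference pixel whose luminance is nearest to its own intensity. The flattened-array interface and tie-breaking by first match are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def colorize_by_luminance(gray, ref_luma, ref_chroma):
    """Assign each voxel the chrominance of the nearest-luminance reference pixel.

    gray       : n-D grayscale/MR intensity array
    ref_luma   : (N,) luminance values of the reference color image's pixels
    ref_chroma : (N, C) chrominance values aligned with ref_luma
    Returns an array of shape gray.shape + (C,) with transferred chrominance.
    """
    flat = gray.ravel().astype(float)
    # distance from every voxel's intensity to every reference luminance
    idx = np.abs(flat[:, None] - ref_luma[None, :]).argmin(axis=1)
    return ref_chroma[idx].reshape(gray.shape + (ref_chroma.shape[1],))
```

In practice one would first segment the volume (GM/WM/CSF) and match within each class, so that tissues with overlapping intensity ranges do not receive each other's colors.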