Example-based image colorization using locality consistent sparse representation
Image colorization aims to produce a natural-looking color image from a given grayscale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target grayscale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target grayscale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms state-of-the-art methods, both visually and quantitatively via a user study.
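The abstract describes coding each target superpixel descriptor against a dictionary of reference descriptors with a locality-promoting regularizer. The paper's exact energy is not reproduced here; the following is a minimal ISTA sketch of one plausible form, where the l1 penalty on each dictionary atom is weighted by its spatial distance to the target superpixel (the weighting scheme and all parameter values are illustrative assumptions):

```python
import numpy as np

def locality_sparse_code(x, D, ref_pos, target_pos, lam=0.1, gamma=0.5,
                         step=0.5, n_iter=300):
    """Toy locality-weighted sparse coding via ISTA (assumed form).

    x          : (d,)   target superpixel descriptor
    D          : (d, k) dictionary of reference superpixel descriptors
    ref_pos    : (k, 2) normalized centers of the reference superpixels
    target_pos : (2,)   normalized center of the target superpixel
    Minimizes ||x - D a||^2 + lam * sum_j w_j |a_j|, where w_j grows with
    the spatial distance of reference superpixel j from the target,
    discouraging matches to far-away regions (hypothetical weighting).
    `step` must be below 1/L for the data-term Lipschitz constant L.
    """
    w = 1.0 + gamma * np.linalg.norm(ref_pos - target_pos, axis=1)
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a - step * (D.T @ (D @ a - x))                    # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam * w, 0.0)  # soft-threshold
    return a
```

The chrominance of the target superpixel would then be taken from the dominant (largest-coefficient) reference superpixels, as the abstract states.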
Depth Map Estimation and Colorization of Anaglyph Images Using Local Color Prior and Reverse Intensity Distribution
In this paper, we present a joint iterative anaglyph stereo matching and colorization framework for obtaining a set of disparity maps and colorized images. Conventional stereo matching algorithms fail when addressing anaglyph images that do not have similar intensities in their two respective view images. To resolve this problem, we propose two novel data costs using a local color prior and a reverse intensity distribution factor for obtaining accurate depth maps. To colorize an anaglyph image, each pixel in one view is warped to the other view using the obtained disparity values of non-occluded regions. A colorization algorithm using optimization is then employed with an additional constraint to colorize the remaining occluded regions. Experimental results confirm that the proposed unified framework is robust and produces accurate depth maps and colorized stereo images.

Funding: National Research Foundation of Korea (Basic Science Research Program, Ministry of Education, NRF-2012R1A1A2009495); National Research Foundation of Korea (Korea government (MSIP), grant No. NRF-2013R1A2A2A01069181).
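The warping step the abstract describes, transferring color from one view to the other along the estimated disparities while leaving occluded pixels for a later optimization pass, can be sketched as follows. The warp direction (left pixel at column x fetches the right-view color at x - d) and the handling of out-of-range matches are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def warp_color_by_disparity(color_right, disparity, occluded):
    """Warp right-view colors into left-view coordinates (simplified sketch).

    color_right : (h, w, 3) color image of the right view
    disparity   : (h, w)    integer left-view disparity map
    occluded    : (h, w)    boolean mask of occluded left-view pixels
    Returns the warped color image and a validity mask; occluded or
    out-of-range pixels are left unfilled for a subsequent
    optimization-based colorization pass.
    """
    h, w = disparity.shape
    out = np.zeros_like(color_right)
    valid = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            if occluded[y, x]:
                continue
            xs = x - disparity[y, x]     # matching column in the right view
            if 0 <= xs < w:
                out[y, x] = color_right[y, xs]
                valid[y, x] = True
    return out, valid
```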
Estimation of Scribble Placement for Painting Colorization
Image colorization has been a topic of interest since the mid-1970s, and several algorithms have been proposed that, given a grayscale image and color scribbles (hints), produce a colorized image. Recently, this approach has been introduced in the field of art conservation and cultural heritage, where B&W photographs of paintings at previous stages have been colorized. However, the questions of what minimum number of scribbles is necessary and where they should be placed in an image remain unexplored. Here we address this limitation using an iterative algorithm that provides insights into the relationship between locally and globally important scribbles. Given a color image, we randomly select scribbles and attempt to color the grayscale version of the original. We define a scribble contribution measure based on the reconstruction error. We demonstrate our approach using a widely used colorization algorithm and images from a Picasso painting and the peppers test image. We show that areas isolated by thick brushstrokes or areas with high textural variation are locally important but contribute very little to the overall representation accuracy. We also find that, for the case of Picasso, on average 10% scribble coverage is enough and that flat areas can be represented by a few scribbles. The proposed method can be used verbatim to test any colorization algorithm.
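A contribution measure "based on the reconstruction error" can be sketched in leave-one-out form: a scribble's contribution is the increase in reconstruction error against the ground-truth color image when that scribble is removed. The exact measure in the paper is not reproduced; MSE and the leave-one-out scheme are assumptions, and the sketch is algorithm-agnostic, matching the claim that the method can test any colorization algorithm:

```python
import numpy as np

def scribble_contribution(colorize, gray, scribbles, reference):
    """Leave-one-out scribble contributions (illustrative sketch).

    colorize  : callable(gray, scribbles) -> color image (any algorithm)
    gray      : grayscale version of the reference image
    scribbles : list of scribble hints accepted by `colorize`
    reference : ground-truth color image
    Returns one contribution value per scribble: the error increase
    observed when that scribble is withheld.
    """
    def err(img):
        return float(np.mean((img - reference) ** 2))  # MSE reconstruction error
    base = err(colorize(gray, scribbles))
    contrib = []
    for i in range(len(scribbles)):
        reduced = scribbles[:i] + scribbles[i + 1:]
        contrib.append(err(colorize(gray, reduced)) - base)
    return contrib
```

A scribble whose removal barely changes the error is locally important at best; one whose removal is actually beneficial has a negative contribution.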
cGAN-based Manga Colorization Using a Single Training Image
The Japanese comic format known as Manga is popular all over the world. It is traditionally produced in black and white, and colorization is time-consuming and costly. Automatic colorization methods generally rely on greyscale values, which are not present in manga. Furthermore, due to copyright protection, colorized manga available for training is scarce. We propose a manga colorization method based on conditional Generative Adversarial Networks (cGAN). Unlike previous cGAN approaches that use many hundreds or thousands of training images, our method requires only a single colorized reference image for training, avoiding the need for a large dataset. Colorizing manga using cGANs can produce blurry results with artifacts, and the resolution is limited. We therefore also propose a segmentation and color-correction method to mitigate these issues. The final results are sharp, clear, in high resolution, and stay true to the character's original color scheme.

Comment: 8 pages, 13 figures
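Training a cGAN from a single colorized reference image implies building many training pairs from that one image, e.g. by random cropping. The following sketch shows only that data-preparation idea; the binarized "line art" input proxy, patch size, and pairing scheme are assumptions for illustration, and the cGAN architecture itself is not reproduced:

```python
import numpy as np

def sample_patches(color_ref, patch=32, n=16, seed=0):
    """Build (input, target) pairs from ONE colorized reference page.

    color_ref : (h, w, 3) colorized reference image, values in [0, 1]
    Returns n randomly cropped pairs: a binarized line-art-style input
    (hypothetical proxy; manga lacks greyscale values) and its color
    target patch.
    """
    rng = np.random.default_rng(seed)
    h, w, _ = color_ref.shape
    inputs, targets = [], []
    for _ in range(n):
        y = int(rng.integers(0, h - patch + 1))
        x = int(rng.integers(0, w - patch + 1))
        crop = color_ref[y:y + patch, x:x + patch]
        # dark pixels -> "ink" (1.0), light pixels -> background (0.0)
        lineart = (crop.mean(axis=2, keepdims=True) < 0.5).astype(np.float32)
        inputs.append(lineart)
        targets.append(crop)
    return np.stack(inputs), np.stack(targets)
```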