Pixelated Semantic Colorization
While many image colorization algorithms have recently shown the capability
of producing plausible color versions from gray-scale photographs, they still
suffer from limited semantic understanding. To address this shortcoming, we
propose to exploit pixelated object semantics to guide image colorization. The
rationale is that human beings perceive and distinguish colors based on the
semantic categories of objects. Starting from an autoregressive model, we
generate image color distributions, from which diverse colored results are
sampled. We propose two ways to incorporate object semantics into the
colorization model: through a pixelated semantic embedding and a pixelated
semantic generator. Specifically, the proposed convolutional neural network
includes two branches. One branch learns what the object is, while the other
branch learns the object colors. The network jointly optimizes a color
embedding loss, a semantic segmentation loss and a color generation loss, in an
end-to-end fashion. Experiments on PASCAL VOC2012 and COCO-stuff reveal that
our network, when trained with semantic segmentation labels, produces more
realistic and finer results than state-of-the-art colorization methods.
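The two-branch design with a jointly optimized loss is the core of this approach. Below is a minimal PyTorch sketch of that idea, assuming illustrative module names, channel sizes, and loss weights; it is not the authors' implementation, and the color embedding loss is omitted for brevity.

```python
# Minimal sketch (not the authors' code): a shared encoder feeds one branch that
# predicts per-pixel semantic classes and another that predicts per-pixel color
# distributions; training uses a weighted sum of the losses named above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchColorizer(nn.Module):
    def __init__(self, num_classes=21, num_color_bins=313):
        super().__init__()
        # shared encoder over the grayscale (L-channel) input
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
        )
        # branch 1: what the object is (semantic segmentation logits)
        self.semantic_head = nn.Conv2d(128, num_classes, 1)
        # branch 2: what color the object takes (quantized color-bin logits)
        self.color_head = nn.Conv2d(128, num_color_bins, 1)

    def forward(self, gray):
        feats = self.encoder(gray)
        return self.semantic_head(feats), self.color_head(feats)

def joint_loss(sem_logits, color_logits, sem_labels, color_labels,
               w_sem=1.0, w_color=1.0):
    # semantic segmentation loss + color generation loss
    # (the paper's color embedding loss is omitted in this sketch)
    loss_sem = F.cross_entropy(sem_logits, sem_labels)
    loss_color = F.cross_entropy(color_logits, color_labels)
    return w_sem * loss_sem + w_color * loss_color

# toy forward/backward pass on random data
model = TwoBranchColorizer()
gray = torch.randn(2, 1, 64, 64)
sem_labels = torch.randint(0, 21, (2, 64, 64))
color_labels = torch.randint(0, 313, (2, 64, 64))
sem_logits, color_logits = model(gray)
joint_loss(sem_logits, color_logits, sem_labels, color_labels).backward()
```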
Deep Video Color Propagation
Traditional approaches for color propagation in videos rely on some form of
matching between consecutive video frames. Using appearance descriptors, colors
are then propagated both spatially and temporally. These methods, however, are
computationally expensive and do not take advantage of semantic information of
the scene. In this work we propose a deep learning framework for color
propagation that combines a local strategy, to propagate colors frame-by-frame
ensuring temporal stability, and a global strategy, using semantics for color
propagation within a longer range. Our evaluation shows the superiority of our
strategy over existing video and image color propagation methods as well as
neural photo-realistic style transfer approaches.
Comment: BMVC 2018
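As a rough illustration of the local-plus-global idea (not the paper's learned pipeline), the sketch below warps colors from the previous frame along a motion field and retrieves colors from a colored reference frame by semantic feature similarity, then blends the two estimates. All function names and the simple per-pixel blend are assumptions.

```python
# Minimal sketch (illustrative only): a "local" estimate propagated frame-by-frame
# plus a "global" estimate matched by semantic features, fused with a fixed blend.
import numpy as np

def propagate_local(prev_ab, flow):
    """Warp the previous frame's ab channels along a per-pixel (dy, dx) flow field."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip((ys + flow[..., 0]).round().astype(int), 0, h - 1)
    src_x = np.clip((xs + flow[..., 1]).round().astype(int), 0, w - 1)
    return prev_ab[src_y, src_x]

def propagate_global(ref_ab, ref_feats, cur_feats):
    """Copy colors from the reference pixel with the most similar semantic feature."""
    h, w, c = cur_feats.shape
    ref = ref_feats.reshape(-1, c)
    cur = cur_feats.reshape(-1, c)
    ref = ref / (np.linalg.norm(ref, axis=1, keepdims=True) + 1e-8)
    cur = cur / (np.linalg.norm(cur, axis=1, keepdims=True) + 1e-8)
    nearest = (cur @ ref.T).argmax(axis=1)        # cosine similarity matching
    return ref_ab.reshape(-1, 2)[nearest].reshape(h, w, 2)

def fuse(local_ab, global_ab, alpha=0.5):
    # the paper learns this fusion; a fixed blend stands in for it here
    return alpha * local_ab + (1 - alpha) * global_ab

# toy usage on random data
h, w = 32, 32
prev_ab = np.random.rand(h, w, 2)
flow = np.random.randn(h, w, 2)
ref_ab = np.random.rand(h, w, 2)
ref_feats, cur_feats = np.random.rand(h, w, 8), np.random.rand(h, w, 8)
ab = fuse(propagate_local(prev_ab, flow), propagate_global(ref_ab, ref_feats, cur_feats))
```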
L-CAD: Language-based Colorization with Any-level Descriptions using Diffusion Priors
Language-based colorization produces plausible and visually pleasing colors
under the guidance of user-friendly natural language descriptions. Previous
methods implicitly assume that users provide comprehensive color descriptions
for most of the objects in the image, which leads to suboptimal performance. In
this paper, we propose a unified model to perform language-based colorization
with any-level descriptions. We leverage the pretrained cross-modality
generative model for its robust language understanding and rich color priors to
handle the inherent ambiguity of any-level descriptions. We further design
modules to align with input conditions to preserve local spatial structures and
prevent the ghosting effect. With the proposed novel sampling strategy, our
model achieves instance-aware colorization in diverse and complex scenarios.
Extensive experimental results demonstrate that our model effectively handles
any-level descriptions and outperforms both language-based and automatic
colorization methods. The code and pretrained models are available
at: https://github.com/changzheng123/L-CAD
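One common way to keep a generated colorization aligned with the grayscale input, and thus preserve its local spatial structure, is to recombine the model's predicted chroma with the original luminance. The sketch below illustrates that general principle only; it is an assumption for illustration, not L-CAD's actual alignment module.

```python
# Minimal sketch (assumption, not L-CAD's code): let a generative model propose
# colors, then swap its luminance for the input grayscale so the output matches
# the input's spatial structure exactly.
import numpy as np
from skimage import color

def preserve_structure(gray, colorized_rgb):
    """Replace the luminance of a proposed colorization with the input grayscale."""
    lab = color.rgb2lab(colorized_rgb)   # proposed colors in Lab space
    lab[..., 0] = gray * 100.0           # keep the input L channel (gray in [0, 1])
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)

# toy usage with random data standing in for a generative model's output
gray = np.random.rand(64, 64)
proposal = np.random.rand(64, 64, 3)
out = preserve_structure(gray, proposal)
```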
Improved Diffusion-based Image Colorization via Piggybacked Models
Image colorization has been attracting the research interests of the
community for decades. However, existing methods still struggle to provide
satisfactory colorized results given grayscale images due to a lack of
human-like global understanding of colors. Recently, large-scale Text-to-Image
(T2I) models have been exploited to transfer the semantic information from the
text prompts to the image domain, where text provides a global control for
semantic objects in the image. In this work, we introduce a colorization model
piggybacking on the existing powerful T2I diffusion model. Our key idea is to
exploit the color prior knowledge in the pre-trained T2I diffusion model for
realistic and diverse colorization. A diffusion guider is designed to
incorporate the pre-trained weights of the latent diffusion model to output a
latent color prior that conforms to the visual semantics of the grayscale
input. A lightness-aware VQVAE will then generate the colorized result with
pixel-perfect alignment to the given grayscale image. Our model can also
achieve conditional colorization with additional inputs (e.g. user hints and
texts). Extensive experiments show that our method achieves state-of-the-art
performance in terms of perceptual quality.
Comment: project page: https://piggyback-color.github.io
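The described pipeline has two stages: a guider that reuses pre-trained latent-diffusion weights to produce a latent color prior from the grayscale input, and a lightness-aware decoder that turns that prior into an image aligned with the input. The structural sketch below uses placeholder modules to show the two-stage interface only; it is not the released model.

```python
# Minimal structural sketch (all modules are toy stand-ins, not the real weights).
import torch
import torch.nn as nn

class DiffusionGuider(nn.Module):
    """Stand-in for the guider; the real one wraps pre-trained latent-diffusion weights."""
    def __init__(self, latent_dim=4):
        super().__init__()
        self.net = nn.Conv2d(1, latent_dim, 3, padding=1)

    def forward(self, gray):
        return self.net(gray)  # latent color prior for the grayscale input

class LightnessAwareDecoder(nn.Module):
    """Stand-in for the lightness-aware VQVAE decoder."""
    def __init__(self, latent_dim=4):
        super().__init__()
        self.net = nn.Conv2d(latent_dim + 1, 3, 3, padding=1)

    def forward(self, latent, gray):
        # decode conditioned on the grayscale so the result stays aligned with it
        latent_up = nn.functional.interpolate(latent, size=gray.shape[-2:])
        return torch.sigmoid(self.net(torch.cat([latent_up, gray], dim=1)))

guider, decoder = DiffusionGuider(), LightnessAwareDecoder()
gray = torch.rand(1, 1, 256, 256)
colorized = decoder(guider(gray), gray)   # (1, 3, 256, 256) RGB in [0, 1]
```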