Deep Video Color Propagation
Traditional approaches for color propagation in videos rely on some form of
matching between consecutive video frames. Using appearance descriptors, colors
are then propagated both spatially and temporally. These methods, however, are
computationally expensive and do not exploit the semantic information of
the scene. In this work we propose a deep learning framework for color
propagation that combines a local strategy, to propagate colors frame-by-frame
ensuring temporal stability, and a global strategy, using semantics for color
propagation within a longer range. Our evaluation shows the superiority of our
strategy over existing video and image color propagation methods as well as
neural photo-realistic style transfer approaches.
Comment: BMVC 201
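The "matching between consecutive video frames" that the abstract attributes to traditional approaches can be made concrete with a toy sketch. The function below is an illustrative baseline, not the paper's method: for each pixel of the current grayscale frame it searches a small window in the previous frame for the best-matching intensity patch (SSD as the appearance descriptor) and copies that pixel's chroma. All names and parameters here are assumptions for illustration.

```python
import numpy as np

def propagate_colors(prev_gray, prev_color, cur_gray, radius=2, patch=1):
    """Toy frame-to-frame color propagation: for each pixel of the
    current (grayscale) frame, copy the chroma of the best-matching
    pixel in the previous frame, found by patch SSD within a window.
    prev_color holds two chroma channels, shape (H, W, 2)."""
    h, w = cur_gray.shape
    pad = radius + patch
    pg = np.pad(prev_gray.astype(np.float32), pad, mode="edge")
    cg = np.pad(cur_gray.astype(np.float32), pad, mode="edge")
    out = np.zeros((h, w, 2), dtype=prev_color.dtype)
    for y in range(h):
        for x in range(w):
            # patch around the current pixel (padded coordinates)
            cp = cg[y + pad - patch:y + pad + patch + 1,
                    x + pad - patch:x + pad + patch + 1]
            best, best_dxy = np.inf, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    pp = pg[y + dy + pad - patch:y + dy + pad + patch + 1,
                            x + dx + pad - patch:x + dx + pad + patch + 1]
                    cost = np.sum((cp - pp) ** 2)  # SSD appearance match
                    if cost < best:
                        best, best_dxy = cost, (dy, dx)
            sy = min(max(y + best_dxy[0], 0), h - 1)
            sx = min(max(x + best_dxy[1], 0), w - 1)
            out[y, x] = prev_color[sy, sx]
    return out
```

Chaining this frame by frame is exactly what the abstract calls expensive (an exhaustive window search per pixel) and semantics-blind, which motivates the paper's learned local/global combination.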
UniColor: A Unified Framework for Multi-Modal Colorization with Transformer
We propose UniColor, the first unified framework to support colorization in
multiple modalities, including both unconditional and conditional ones, such as
stroke, exemplar, text, and even a mix of them. Rather than learning a separate
model for each type of condition, we introduce a two-stage colorization
framework for incorporating various conditions into a single model. In the
first stage, multi-modal conditions are converted into a common representation
of hint points. In particular, we propose a novel CLIP-based method to convert
the text to hint points. In the second stage, we propose a Transformer-based
network composed of Chroma-VQGAN and Hybrid-Transformer to generate diverse and
high-quality colorization results conditioned on hint points. Both qualitative
and quantitative comparisons demonstrate that our method outperforms
state-of-the-art methods in every control modality and further enables
multi-modal colorization that was not feasible before. Moreover, we design an
interactive interface showing the effectiveness of our unified framework in
practical usage, including automatic colorization, hybrid-control colorization,
local recolorization, and iterative color editing. Our code and models are
available at https://luckyhzt.github.io/unicolor.
Comment: Accepted by SIGGRAPH Asia 2022.
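The first stage's idea of reducing every modality to a common representation of hint points can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: a hint point is taken to be (row, col, a, b) in Lab chroma, a user stroke is subsampled into such points, and an exemplar is reduced to points via a crude luminance-nearest-neighbor lookup standing in for learned (or CLIP-based) correspondence, which the toy omits.

```python
import numpy as np

def stroke_to_hints(stroke_pixels, ab_color, stride=4):
    """Subsample a user stroke into sparse hint points.
    Each hint is (row, col, a, b) in Lab chroma - the common
    representation that every condition is reduced to."""
    return [(r, c, ab_color[0], ab_color[1])
            for r, c in stroke_pixels[::stride]]

def exemplar_to_hints(gray, exemplar_gray, exemplar_ab, n_points=16, seed=0):
    """Sample random target locations and copy the chroma of the
    most similar-luminance exemplar pixel (a crude stand-in for
    learned correspondence)."""
    rng = np.random.default_rng(seed)
    h, w = gray.shape
    flat_ex = exemplar_gray.reshape(-1)
    hints = []
    for _ in range(n_points):
        r, c = rng.integers(0, h), rng.integers(0, w)
        idx = np.argmin(np.abs(flat_ex - gray[r, c]))  # nearest luminance
        er, ec = divmod(int(idx), exemplar_gray.shape[1])
        hints.append((int(r), int(c), *exemplar_ab[er, ec]))
    return hints
```

Once every condition is a list of hint points, a single second-stage generator (Chroma-VQGAN plus Hybrid-Transformer in the paper) can consume them uniformly, which is what makes mixing stroke, exemplar, and text conditions in one model possible.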