Photorealistic Style Transfer with Screened Poisson Equation
Recent work has shown impressive success in transferring painterly style to
images. These approaches, however, fall short of photorealistic style transfer.
Even when both the input and reference images are photographs, the output still
exhibits distortions reminiscent of a painting. In this paper we propose an
approach that takes as input a stylized image and makes it more photorealistic.
It relies on the Screened Poisson Equation, maintaining the fidelity of the
stylized image while constraining the gradients to those of the original input
image. Our method is fast, simple, fully automatic, and makes clear progress
toward photorealistic results. Our results exhibit finer details and are less
prone to artifacts than the state of the art.
Comment: presented at BMVC 201
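To make the gradient-constraint idea above concrete, here is a minimal NumPy
sketch (not the authors' implementation) of screened Poisson fusion: the output
keeps the colors of the stylized image while its gradients are pulled toward
those of the original photo, solved per channel in the Fourier domain under a
periodic-boundary assumption. The function name and the default lambda are
illustrative.

    import numpy as np

    def screened_poisson_fuse(stylized, original, lam=0.05):
        # Solve  lam*f - lap(f) = lam*stylized - lap(original)  per channel.
        # stylized, original: float arrays in [0, 1] with shape (H, W, C).
        # Smaller lam favours the original gradients over the stylized colors.
        h, w, c = stylized.shape
        yy = np.arange(h)[:, None]
        xx = np.arange(w)[None, :]
        # Eigenvalues of the 5-point discrete Laplacian under the 2D FFT.
        eig = (2 * np.cos(2 * np.pi * yy / h) +
               2 * np.cos(2 * np.pi * xx / w) - 4)
        out = np.empty_like(stylized)
        for ch in range(c):
            S = np.fft.fft2(stylized[..., ch])
            I = np.fft.fft2(original[..., ch])
            F = (lam * S - eig * I) / (lam - eig)
            out[..., ch] = np.real(np.fft.ifft2(F))
        return np.clip(out, 0.0, 1.0)

For large lam the solution reverts to the stylized image; for small lam it
keeps only the stylized image's mean color on top of the original's gradients.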
CAP-VSTNet: Content Affinity Preserved Versatile Style Transfer
Loss of content affinity, both feature affinity and pixel affinity, is a major
cause of artifacts in photorealistic and video style transfer. This paper
proposes a new framework named CAP-VSTNet, which consists of a new reversible
residual network and an unbiased linear transform module, for versatile style
transfer. The reversible residual network preserves content affinity without
introducing the redundant information that traditional reversible networks do,
and hence facilitates better stylization. Together with a Matting Laplacian
training loss that addresses the pixel-affinity loss caused by the linear
transform, the proposed framework is applicable and effective on
versatile style transfer. Extensive experiments show that CAP-VSTNet can
produce better qualitative and quantitative results in comparison with the
state-of-the-art methods.
Comment: CVPR 202
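The "reversible residual network" mentioned above is, generically, a stack of
coupling blocks that can be inverted exactly, which is what lets content
information pass through without loss. The PyTorch sketch below shows a
standard additive-coupling block of that kind; it illustrates the general
technique and is not CAP-VSTNet's actual architecture.

    import torch
    import torch.nn as nn

    class ReversibleCouplingBlock(nn.Module):
        # Generic additive coupling: split channels in half, update each
        # half with a residual function of the other, so the block can be
        # inverted exactly and intermediate activations need not be stored.
        def __init__(self, channels):
            super().__init__()
            half = channels // 2
            self.f = nn.Sequential(nn.Conv2d(half, half, 3, padding=1),
                                   nn.ReLU(inplace=True),
                                   nn.Conv2d(half, half, 3, padding=1))
            self.g = nn.Sequential(nn.Conv2d(half, half, 3, padding=1),
                                   nn.ReLU(inplace=True),
                                   nn.Conv2d(half, half, 3, padding=1))

        def forward(self, x):
            x1, x2 = torch.chunk(x, 2, dim=1)
            y1 = x1 + self.f(x2)
            y2 = x2 + self.g(y1)
            return torch.cat([y1, y2], dim=1)

        def inverse(self, y):
            y1, y2 = torch.chunk(y, 2, dim=1)
            x2 = y2 - self.g(y1)
            x1 = y1 - self.f(x2)
            return torch.cat([x1, x2], dim=1)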
Ref-NPR: Reference-Based Non-Photorealistic Radiance Fields for Controllable Scene Stylization
Current 3D scene stylization methods transfer textures and colors as styles
using arbitrary style references, lacking meaningful semantic correspondences.
We introduce Reference-Based Non-Photorealistic Radiance Fields (Ref-NPR) to
address this limitation. This controllable method stylizes a 3D scene using
radiance fields with a single stylized 2D view as a reference. We propose a ray
registration process based on the stylized reference view to obtain pseudo-ray
supervision in novel views. Then we exploit semantic correspondences in content
images to fill occluded regions with perceptually similar styles, resulting in
non-photorealistic and continuous novel view sequences. Our experimental
results demonstrate that Ref-NPR outperforms existing scene and video
stylization methods regarding visual quality and semantic correspondence. The
code and data are publicly available on the project page at
https://ref-npr.github.io.
Comment: Accepted by CVPR 2023. 17 pages, 20 figures. Project page:
https://ref-npr.github.io, Code: https://github.com/dvlab-research/Ref-NPR
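As a rough, hypothetical illustration of the pseudo-ray supervision described
above (names and shapes are assumptions, not taken from the authors' code):
novel-view rays that register to pixels of the stylized reference view receive
a direct color loss, while unregistered rays are left to the other objectives.

    import torch

    def pseudo_ray_loss(rendered_rgb, ref_rgb, registered):
        # rendered_rgb: (N, 3) colors rendered by the radiance field.
        # ref_rgb:      (N, 3) stylized reference colors matched to each ray.
        # registered:   (N,) bool mask, True where ray registration succeeded.
        if registered.sum() == 0:
            return rendered_rgb.new_zeros(())
        diff = rendered_rgb[registered] - ref_rgb[registered]
        return (diff ** 2).mean()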
Ultrafast Photorealistic Style Transfer via Neural Architecture Search
The key challenge in photorealistic style transfer is that an algorithm
should faithfully transfer the style of a reference photo to a content photo
while the generated image should look like one captured by a camera. Although
several photorealistic style transfer algorithms have been proposed, they rely
on pre- and/or post-processing to make the generated images look
photorealistic. Without that additional processing, these algorithms fail to
produce plausible photorealistic stylization in terms of detail
preservation and photorealism. In this work, we propose an effective solution
to these issues. Our method consists of a construction step (C-step) to build a
photorealistic stylization network and a pruning step (P-step) for
acceleration. In the C-step, we propose a dense auto-encoder named PhotoNet
based on a carefully designed pre-analysis. PhotoNet integrates a feature
aggregation module (BFA) and instance normalized skip links (INSL). To generate
faithful stylization, we introduce multiple style transfer modules in the
decoder and INSLs. PhotoNet significantly outperforms existing algorithms in
terms of both efficiency and effectiveness. In the P-step, we adopt a neural
architecture search method to accelerate PhotoNet. We propose an automatic
network pruning framework in the manner of teacher-student learning for
photorealistic stylization. The network architecture resulting from the search,
named PhotoNAS, achieves significant acceleration over PhotoNet while keeping
the stylization effects almost intact. We conduct extensive experiments on both
image and video transfer. The results show that our method produces favorable
stylizations while achieving 20-30 times acceleration in comparison with
the existing state-of-the-art approaches. It is worth noting that the proposed
algorithm accomplishes better performance without any pre- or post-processing
- …
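One way to read the "instance normalized skip links" (INSL) mentioned above is
as skip connections whose encoder features are instance-normalized before being
fused into the decoder, stripping per-image statistics (which tend to carry
style) while keeping spatial structure. The sketch below is an assumption about
what such a link could look like, not the paper's exact design.

    import torch
    import torch.nn as nn

    class InstanceNormalizedSkipLink(nn.Module):
        # Hypothetical INSL: instance-normalize the encoder feature, then
        # fuse it with the decoder feature of the same resolution via a
        # 1x1 convolution.
        def __init__(self, channels):
            super().__init__()
            self.norm = nn.InstanceNorm2d(channels, affine=False)
            self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

        def forward(self, encoder_feat, decoder_feat):
            skipped = self.norm(encoder_feat)
            return self.fuse(torch.cat([skipped, decoder_feat], dim=1))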