Transport-Based Neural Style Transfer for Smoke Simulations
Artistically controlling fluids has always been a challenging task.
Optimization techniques rely on approximating simulation states towards target
velocity or density field configurations, which are often handcrafted by
artists to indirectly control smoke dynamics. Patch synthesis techniques
transfer image textures or simulation features to a target flow field. However,
these are either limited to adding structural patterns or augmenting coarse
flows with turbulent structures, and hence cannot capture the full spectrum of
different styles and semantically complex structures. In this paper, we propose
the first Transport-based Neural Style Transfer (TNST) algorithm for volumetric
smoke data. Our method is able to transfer features from natural images to
smoke simulations, enabling general content-aware manipulations ranging from
simple patterns to intricate motifs. The proposed algorithm is physically
inspired, since it computes the density transport from a source input smoke to
a desired target configuration. Our transport-based approach allows direct
control over the divergence of the stylization velocity field by optimizing
incompressible and irrotational potentials that transport smoke towards
stylization. Temporal consistency is ensured by transporting and aligning
subsequent stylized velocities, and 3D reconstructions are computed by
seamlessly merging stylizations from different camera viewpoints.
Comment: ACM Transactions on Graphics (SIGGRAPH Asia 2019), additional
materials: http://www.byungsoo.me/project/neural-flow-styl
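The transport-based formulation can be illustrated with a minimal 2D sketch. Everything below (the grid setup, function names, and the nearest-neighbour semi-Lagrangian step) is an illustrative assumption rather than the TNST implementation: the stylization velocity is assembled from an irrotational part, the gradient of a scalar potential, and an incompressible part, the 2D curl of a stream function, and the smoke density is advected by the resulting field.

```python
import numpy as np

def stylization_velocity(phi, psi, dx=1.0):
    """Assemble a 2D velocity field from a scalar potential phi (irrotational
    part) and a stream function psi (divergence-free part)."""
    dphi_dy, dphi_dx = np.gradient(phi, dx)
    dpsi_dy, dpsi_dx = np.gradient(psi, dx)
    u = dphi_dx + dpsi_dy   # x-component: grad(phi) plus 2D curl of psi
    v = dphi_dy - dpsi_dx   # y-component
    return u, v

def advect_density(rho, u, v, dt=0.1):
    """Semi-Lagrangian advection of density rho by velocity (u, v),
    with nearest-neighbour sampling for brevity."""
    ny, nx = rho.shape
    ys, xs = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    src_x = np.clip(xs - dt * u, 0, nx - 1).round().astype(int)
    src_y = np.clip(ys - dt * v, 0, ny - 1).round().astype(int)
    return rho[src_y, src_x]

# Toy usage: random potentials stand in for the optimized stylization potentials.
rng = np.random.default_rng(0)
phi, psi = rng.standard_normal((64, 64)), rng.standard_normal((64, 64))
rho = np.zeros((64, 64)); rho[24:40, 24:40] = 1.0   # a square smoke blob
u, v = stylization_velocity(phi, psi)
print(advect_density(rho, u, v).sum())
```

In the paper it is the potentials that are optimized against the style objective, which is what gives direct control over the divergence of the stylization velocity; the sketch only shows how such potentials translate into a transporting velocity field.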
06221 Abstracts Collection -- Computational Aesthetics in Graphics, Visualization and Imaging
From 28.05.06 to 02.06.06, the Dagstuhl Seminar 06221 ``Computational Aesthetics in Graphics, Visualization and Imaging'' was held
in the International Conference and Research Center (IBFI),
Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
Controlling Perceptual Factors in Neural Style Transfer
Neural Style Transfer has shown very exciting results enabling new forms of
image manipulation. Here we extend the existing method to introduce control
over spatial location, colour information, and spatial scale. We
demonstrate how this enhances the method by allowing high-resolution controlled
stylisation and helps to alleviate common failure cases such as applying ground
textures to sky regions. Furthermore, by decomposing style into these
perceptual factors we enable the combination of style information from multiple
sources to generate new, perceptually appealing styles from existing ones. We
also describe how these methods can be used to produce large, high-quality
stylisations more efficiently. Finally, we show how the introduced control
measures can be applied in recent methods for Fast Neural Style Transfer.
Comment: Accepted at CVPR 2017
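The spatial-control component lends itself to a short sketch. Assuming spatial guidance masks applied to CNN feature maps before computing Gram matrices (the names, shapes, and normalization below are illustrative, not the paper's code), region-wise style statistics can be matched so that, for example, sky textures are only transferred to sky regions:

```python
import torch

def guided_gram(features, mask):
    """features: (C, H, W) feature map from a CNN layer.
    mask: (H, W) guidance weights in [0, 1] selecting one region."""
    c, h, w = features.shape
    guided = features * mask.unsqueeze(0)   # zero out features outside the region
    flat = guided.reshape(c, h * w)
    return flat @ flat.t() / (h * w)        # (C, C) Gram matrix for this region

def guided_style_loss(output_feats, style_feats, output_mask, style_mask):
    """Match region-wise style statistics between stylized output and style image."""
    g_out = guided_gram(output_feats, output_mask)
    g_sty = guided_gram(style_feats, style_mask)
    return torch.mean((g_out - g_sty) ** 2)

# Toy usage with random activations standing in for VGG features.
feats_out, feats_sty = torch.randn(64, 32, 32), torch.randn(64, 32, 32)
sky_out = torch.zeros(32, 32); sky_out[:16] = 1.0   # top half labelled "sky"
sky_sty = torch.zeros(32, 32); sky_sty[:16] = 1.0
print(guided_style_loss(feats_out, feats_sty, sky_out, sky_sty))
```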
Lagrangian Neural Style Transfer for Fluids
Artistically controlling the shape, motion and appearance of fluid
simulations poses major challenges in visual effects production. In this paper,
we present a neural style transfer approach from images to 3D fluids formulated
in a Lagrangian viewpoint. Using particles for style transfer has unique
benefits compared to grid-based techniques. Attributes are stored on the
particles and hence are trivially transported by the particle motion. This
intrinsically ensures temporal consistency of the optimized stylized structure
and notably improves the resulting quality. Simultaneously, the expensive,
recursive alignment of stylization velocity fields of grid approaches is
unnecessary, reducing the computation time to less than an hour and rendering
neural flow stylization practical in production settings. Moreover, the
Lagrangian representation improves artistic control as it allows for
multi-fluid stylization and consistent color transfer from images, and the
generality of the method enables stylization of smoke and liquids alike.
Comment: ACM Transactions on Graphics (SIGGRAPH 2020), additional materials:
http://www.byungsoo.me/project/lnst/index.htm
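The claim that particle-stored attributes are trivially transported can be made concrete with a small sketch; the class and the toy velocity field below are illustrative assumptions, not the paper's implementation. Because the stylization attributes are attached to particles, advecting the positions carries the stylization along, and no per-frame alignment of stylized velocity fields is needed:

```python
import numpy as np

class StylizedParticles:
    """Particles carrying per-particle stylization attributes."""
    def __init__(self, positions, attributes):
        self.x = positions      # (N, 3) particle positions
        self.attr = attributes  # (N, K) stylization attributes (e.g. color offsets)

    def step(self, velocity_fn, dt):
        # Move particles with the fluid; attributes stay attached to each
        # particle, so temporal consistency of the stylization comes for free.
        self.x = self.x + dt * velocity_fn(self.x)

# Toy usage: a swirling velocity field and random "style" attributes.
rng = np.random.default_rng(1)
parts = StylizedParticles(rng.uniform(-1, 1, (1000, 3)),
                          rng.standard_normal((1000, 8)))

def swirl(p):
    return np.stack([-p[:, 1], p[:, 0], np.zeros(len(p))], axis=1)

for _ in range(10):
    parts.step(swirl, dt=0.05)
print(parts.x.shape, parts.attr.shape)
```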
Text-driven Editing of 3D Scenes without Retraining
Numerous diffusion models have recently been applied to image synthesis and
editing. However, editing 3D scenes is still in its early stages. It poses
various challenges, such as the requirement to design specific methods for
different editing types, retraining new models for various 3D scenes, and the
absence of convenient human interaction during editing. To tackle these issues,
we introduce a text-driven editing method, termed DN2N, which allows for the
direct acquisition of a NeRF model with universal editing capabilities,
eliminating the requirement for retraining. Our method employs off-the-shelf
text-based editing models of 2D images to modify the 3D scene images, followed
by a filtering process to discard poorly edited images that disrupt 3D
consistency. We then consider the remaining inconsistency as a problem of
removing noise perturbations, which can be solved by generating training data
with similar perturbation characteristics. We further propose
cross-view regularization terms to help the generalized NeRF model mitigate
these perturbations. Our text-driven method allows users to edit a 3D scene
with their desired description, which is more user-friendly, intuitive, and
practical than prior works. Empirical results show that our method achieves
multiple editing types, including but not limited to appearance editing,
weather transition, material changing, and style transfer. Most importantly,
our method generalizes well, with editing abilities shared among a single set
of model parameters rather than requiring a customized editing model for
specific scenes, and thus infers novel views with editing effects directly from
user input. The project website is available at http://sk-fun.fun/DN2N
Comment: Project website: http://sk-fun.fun/DN2N
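The edit-filter-train pipeline described above can be sketched as follows; the scoring function, threshold, and neighbour scheme are hypothetical placeholders, not DN2N's actual filtering criteria:

```python
from typing import Callable, List
import numpy as np

def build_edited_training_set(
    views: List[np.ndarray],
    edit_2d: Callable[[np.ndarray, str], np.ndarray],              # off-the-shelf text-driven 2D editor
    consistency_score: Callable[[np.ndarray, np.ndarray], float],  # hypothetical cross-view metric
    prompt: str,
    threshold: float = 0.5,
) -> List[np.ndarray]:
    """Edit every rendered view with a 2D text-based editor, then discard
    edited views that disagree too much with their neighbours, so the
    surviving images form a roughly 3D-consistent training set."""
    edited = [edit_2d(v, prompt) for v in views]
    kept = []
    for i, img in enumerate(edited):
        neighbors = [edited[j] for j in (i - 1, i + 1) if 0 <= j < len(edited)]
        score = float(np.mean([consistency_score(img, n) for n in neighbors]))
        if score >= threshold:
            kept.append(img)
    return kept
```

The remaining, milder inconsistencies are what the abstract treats as noise perturbations to be absorbed by the generalized NeRF training and the cross-view regularization terms.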
Locally Stylized Neural Radiance Fields
In recent years, there has been increasing interest in applying stylization
on 3D scenes from a reference style image, in particular onto neural radiance
fields (NeRF). While performing stylization directly on NeRF guarantees
appearance consistency over arbitrary novel views, it is a challenging problem
to guide the transfer of patterns from the style image onto different parts of
the NeRF scene. In this work, we propose a stylization framework for NeRF based
on local style transfer. In particular, we use a hash-grid encoding to learn
the embedding of the appearance and geometry components, and show that the
mapping defined by the hash table allows us to control the stylization to a
certain extent. Stylization is then achieved by optimizing the appearance
branch while keeping the geometry branch fixed. To support local style
transfer, we propose a new loss function that utilizes a segmentation network
and bipartite matching to establish region correspondences between the style
image and the content images obtained from volume rendering. Our experiments
show that our method yields plausible stylization results with novel view
synthesis while having flexible controllability via manipulating and
customizing the region correspondences.
Comment: ICCV 2023
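The region-correspondence step can be sketched with standard bipartite matching; the feature descriptors and cost below are illustrative assumptions, not the paper's exact loss. Segmented regions from the rendered content views and from the style image are each summarized by a descriptor, and the Hungarian algorithm pairs them at minimum total feature distance:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_regions(content_region_feats, style_region_feats):
    """content_region_feats: (Nc, D), style_region_feats: (Ns, D).
    Returns (content_idx, style_idx) pairs of matched regions."""
    # Pairwise squared distances form the assignment cost matrix.
    diff = content_region_feats[:, None, :] - style_region_feats[None, :, :]
    cost = np.sum(diff ** 2, axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy usage with random region descriptors standing in for segmentation features.
rng = np.random.default_rng(2)
pairs = match_regions(rng.standard_normal((5, 16)), rng.standard_normal((5, 16)))
print(pairs)
```

The matched pairs then define which style-image region each rendered region's style loss is computed against, which is what makes the stylization locally controllable.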
DeformToon3D: Deformable 3D Toonification from Neural Radiance Fields
In this paper, we address the challenging problem of 3D toonification, which
involves transferring the style of an artistic domain onto a target 3D face
with stylized geometry and texture. Although fine-tuning a pre-trained 3D GAN
on the artistic domain can produce reasonable performance, this strategy has
limitations in the 3D domain. In particular, fine-tuning can deteriorate the
original GAN latent space, which affects subsequent semantic editing, and
requires independent optimization and storage for each new style, limiting
flexibility and efficient deployment. To overcome these challenges, we propose
DeformToon3D, an effective toonification framework tailored for hierarchical 3D
GAN. Our approach decomposes 3D toonification into subproblems of geometry and
texture stylization to better preserve the original latent space. Specifically,
we devise a novel StyleField that predicts conditional 3D deformation to align
a real-space NeRF to the style space for geometry stylization. Thanks to the
StyleField formulation, which already handles geometry stylization well,
texture stylization can be achieved conveniently via adaptive style mixing that
injects information of the artistic domain into the decoder of the pre-trained
3D GAN. Due to the unique design, our method enables flexible style degree
control and shape-texture-specific style swap. Furthermore, we achieve
efficient training without any real-world 2D-3D training pairs but proxy
samples synthesized from off-the-shelf 2D toonification models.
Comment: ICCV 2023. Code: https://github.com/junzhezhang/DeformToon3D Project
page: https://www.mmlab-ntu.com/project/deformtoon3d
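The StyleField idea, a conditional deformation that aligns real-space samples with the style space, can be sketched as a small conditional MLP; the layer sizes and conditioning scheme below are assumptions for illustration, not the released architecture:

```python
import torch
import torch.nn as nn

class StyleField(nn.Module):
    """Predicts a per-point 3D displacement conditioned on a style code,
    so the frozen real-space radiance field can be queried at deformed
    (style-space) locations."""
    def __init__(self, style_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + style_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),   # 3D displacement per sample point
        )

    def forward(self, points, style_code):
        # points: (N, 3); style_code: (style_dim,) broadcast to every point.
        cond = style_code.expand(points.shape[0], -1)
        return self.mlp(torch.cat([points, cond], dim=-1))

# Toy usage: deform NeRF sample points before querying a (placeholder) radiance field.
field = StyleField()
pts = torch.rand(1024, 3)
z_style = torch.randn(64)
deformed_pts = pts + field(pts, z_style)
print(deformed_pts.shape)
```

Keeping geometry stylization in such a deformation field is what lets the pre-trained 3D GAN's latent space stay untouched, with texture stylization handled separately by style mixing in the decoder.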