Transport-Based Neural Style Transfer for Smoke Simulations
Artistically controlling fluids has always been a challenging task.
Optimization techniques rely on approximating simulation states towards target
velocity or density field configurations, which are often handcrafted by
artists to indirectly control smoke dynamics. Patch synthesis techniques
transfer image textures or simulation features to a target flow field. However,
these are either limited to adding structural patterns or augmenting coarse
flows with turbulent structures, and hence cannot capture the full spectrum of
different styles and semantically complex structures. In this paper, we propose
the first Transport-based Neural Style Transfer (TNST) algorithm for volumetric
smoke data. Our method is able to transfer features from natural images to
smoke simulations, enabling general content-aware manipulations ranging from
simple patterns to intricate motifs. The proposed algorithm is physically
inspired, since it computes the density transport from a source input smoke to
a desired target configuration. Our transport-based approach allows direct
control over the divergence of the stylization velocity field by optimizing
incompressible and irrotational potentials that transport smoke towards
stylization. Temporal consistency is ensured by transporting and aligning
subsequent stylized velocities, and 3D reconstructions are computed by
seamlessly merging stylizations from different camera viewpoints.
Comment: ACM Transactions on Graphics (SIGGRAPH ASIA 2019), additional
materials: http://www.byungsoo.me/project/neural-flow-styl
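The divergence control described above rests on a Helmholtz-style split of the stylization velocity into an incompressible part (the curl of a stream function) and an irrotational part (the gradient of a scalar potential). A minimal 2D NumPy sketch of that decomposition follows; the function names and grid spacing `h` are illustrative, not the paper's API:

```python
import numpy as np

def velocity_from_potentials(psi, phi, h=1.0):
    """Assemble a 2D velocity field from a stream function psi
    (incompressible part) and a scalar potential phi (irrotational
    part), using central differences on the grid interior.
    Rows index y, columns index x."""
    u_inc = (psi[2:, 1:-1] - psi[:-2, 1:-1]) / (2 * h)    # u =  d(psi)/dy
    v_inc = -(psi[1:-1, 2:] - psi[1:-1, :-2]) / (2 * h)   # v = -d(psi)/dx
    u_irr = (phi[1:-1, 2:] - phi[1:-1, :-2]) / (2 * h)    # u =  d(phi)/dx
    v_irr = (phi[2:, 1:-1] - phi[:-2, 1:-1]) / (2 * h)    # v =  d(phi)/dy
    return (u_inc + u_irr, v_inc + v_irr), (u_inc, v_inc), (u_irr, v_irr)

def divergence(u, v, h=1.0):
    """Central-difference divergence du/dx + dv/dy on the interior."""
    return (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * h) + \
           (v[2:, 1:-1] - v[:-2, 1:-1]) / (2 * h)

def curl(u, v, h=1.0):
    """Central-difference 2D curl dv/dx - du/dy on the interior."""
    return (v[1:-1, 2:] - v[1:-1, :-2]) / (2 * h) - \
           (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * h)
```

Because mixed central differences commute, the stream-function part is divergence-free and the potential part curl-free to machine precision, which is what lets an optimizer steer the divergence of the stylization field by weighting the two potentials.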
Neural Smoke Stylization with Color Transfer
Artistically controlling fluid simulations requires a large amount of manual
work by an artist. The recently presented transport-based neural style transfer
approach simplifies workflows as it transfers the style of arbitrary input
images onto 3D smoke simulations. However, the method only modifies the shape
of the fluid but omits color information. In this work, we therefore extend the
previous approach to obtain a complete pipeline for transferring shape and
color information onto 2D and 3D smoke simulations with neural networks. Our
results demonstrate that our method successfully transfers colored style
features consistently in space and time to smoke data for different input
textures.
Comment: Submitted to Eurographics202
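As a rough illustration of what a color transfer stage accomplishes, the following sketch matches per-channel color statistics in the style of Reinhard et al.; this is a classical stand-in for intuition, not the neural method the abstract describes:

```python
import numpy as np

def match_color_statistics(content, style, eps=1e-8):
    """Shift and scale each color channel of `content` so that its
    mean and standard deviation match those of `style`."""
    out = np.empty_like(content, dtype=float)
    for c in range(content.shape[-1]):
        cc, sc = content[..., c], style[..., c]
        out[..., c] = (cc - cc.mean()) / (cc.std() + eps) * sc.std() + sc.mean()
    return out
```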
Lagrangian Neural Style Transfer for Fluids
Artistically controlling the shape, motion and appearance of fluid
simulations poses major challenges in visual effects production. In this paper,
we present a neural style transfer approach from images to 3D fluids formulated
in a Lagrangian viewpoint. Using particles for style transfer has unique
benefits compared to grid-based techniques. Attributes are stored on the
particles and hence are trivially transported by the particle motion. This
intrinsically ensures temporal consistency of the optimized stylized structure
and notably improves the resulting quality. Simultaneously, the expensive,
recursive alignment of stylization velocity fields of grid approaches is
unnecessary, reducing the computation time to less than an hour and rendering
neural flow stylization practical in production settings. Moreover, the
Lagrangian representation improves artistic control as it allows for
multi-fluid stylization and consistent color transfer from images, and the
generality of the method enables stylization of smoke and liquids alike.
Comment: ACM Transactions on Graphics (SIGGRAPH 2020), additional materials:
http://www.byungsoo.me/project/lnst/index.htm
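The temporal-consistency argument above is easy to see in code: any attribute stored on a particle simply rides along under advection, so no realignment pass is needed. A tiny sketch with illustrative names (not the paper's implementation):

```python
import numpy as np

def advect(positions, velocities, dt):
    """One explicit Euler step; particles simply move with the flow."""
    return positions + dt * velocities

# Two particles, each carrying a stylized color attribute.
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
vel = np.array([[1.0, 0.0], [0.0, 1.0]])
color = np.array([[1.0, 0.2, 0.2], [0.2, 0.2, 1.0]])

pos = advect(pos, vel, dt=0.5)
# pos is now [[0.5, 0.0], [1.0, 0.5]]; color required no update at all,
# unlike grid-based methods that must re-align stylization velocities.
```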
Meta-Learning Dynamics Forecasting Using Task Inference
Current deep learning models for dynamics forecasting struggle with
generalization. They can only forecast in a specific domain and fail when
applied to systems with different parameters, external forces, or boundary
conditions. We propose a model-based meta-learning method called DyAd which can
generalize across heterogeneous domains by partitioning them into different
tasks. DyAd has two parts: an encoder which infers the time-invariant hidden
features of the task with weak supervision, and a forecaster which learns the
shared dynamics of the entire domain. The encoder adapts and controls the
forecaster during inference using adaptive instance normalization and adaptive
padding. Theoretically, we prove that the generalization error of such a
procedure is related to the task relatedness in the source domain, as well as
the domain differences between source and target. Experimentally, we
demonstrate that our model outperforms state-of-the-art approaches on both
turbulent flow and real-world ocean data forecasting tasks.
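Adaptive instance normalization, the mechanism through which the encoder conditions the forecaster, can be sketched in a few lines; the shapes and names here are illustrative rather than DyAd's actual code:

```python
import numpy as np

def adaptive_instance_norm(x, gamma, beta, eps=1e-5):
    """Normalize each (sample, channel) over its spatial extent, then
    rescale and shift with task-conditioned parameters gamma and beta
    (which an encoder like DyAd's would predict from the inferred task)."""
    # x: (batch, channels, height, width); gamma, beta: (channels,)
    mean = x.mean(axis=(2, 3), keepdims=True)
    std = x.std(axis=(2, 3), keepdims=True)
    x_norm = (x - mean) / (std + eps)
    return gamma[None, :, None, None] * x_norm + beta[None, :, None, None]
```

Because gamma and beta come from the task encoder rather than being fixed learned weights, the same forecaster can be steered toward different dynamical regimes at inference time.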