Transport-Based Neural Style Transfer for Smoke Simulations
Artistically controlling fluids has always been a challenging task.
Optimization techniques steer simulation states towards target velocity or
density field configurations, which are often handcrafted by
artists to indirectly control smoke dynamics. Patch synthesis techniques
transfer image textures or simulation features to a target flow field. However,
these are either limited to adding structural patterns or augmenting coarse
flows with turbulent structures, and hence cannot capture the full spectrum of
different styles and semantically complex structures. In this paper, we propose
the first Transport-based Neural Style Transfer (TNST) algorithm for volumetric
smoke data. Our method is able to transfer features from natural images to
smoke simulations, enabling general content-aware manipulations ranging from
simple patterns to intricate motifs. The proposed algorithm is physically
inspired, since it computes the density transport from a source input smoke to
a desired target configuration. Our transport-based approach allows direct
control over the divergence of the stylization velocity field by optimizing
incompressible and irrotational potentials that transport smoke towards
stylization. Temporal consistency is ensured by transporting and aligning
subsequent stylized velocities, and 3D reconstructions are computed by
seamlessly merging stylizations from different camera viewpoints.
Comment: ACM Transactions on Graphics (SIGGRAPH Asia 2019), additional
materials: http://www.byungsoo.me/project/neural-flow-styl
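The key to the transport formulation is that the stylization velocity is assembled from two potentials, an irrotational part and a divergence-free part, so the divergence of the field can be controlled directly. Below is a minimal 2D sketch of that decomposition plus a first-order semi-Lagrangian advection step; the function names, the 2D restriction, and the grid discretization are illustrative assumptions, not the paper's implementation, and the optimization of the potentials against an image-based style loss is omitted.

```python
import numpy as np

def velocity_from_potentials(phi, psi, h=1.0):
    # Irrotational component: the gradient of phi. Its divergence is
    # the Laplacian of phi, which is what the method can control.
    dphi_dy, dphi_dx = np.gradient(phi, h)
    # Incompressible component: the 2D curl of a stream function psi,
    # divergence-free by construction.
    dpsi_dy, dpsi_dx = np.gradient(psi, h)
    u = dphi_dx + dpsi_dy  # x-velocity
    v = dphi_dy - dpsi_dx  # y-velocity
    return u, v

def advect_density(rho, u, v, dt=0.1):
    # First-order semi-Lagrangian advection: trace each cell back
    # along the velocity and sample the density there.
    ny, nx = rho.shape
    ys, xs = np.mgrid[0:ny, 0:nx].astype(float)
    src_x = np.clip(xs - dt * u, 0, nx - 1).round().astype(int)
    src_y = np.clip(ys - dt * v, 0, ny - 1).round().astype(int)
    return rho[src_y, src_x]
```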
Frustum PointNets for 3D Object Detection from RGB-D Data
In this work, we study 3D object detection from RGB-D data in both indoor and
outdoor scenes. While previous methods focus on images or 3D voxels, often
obscuring natural 3D patterns and invariances of 3D data, we directly operate
on raw point clouds by popping up RGB-D scans. However, a key challenge of this
approach is how to efficiently localize objects in point clouds of large-scale
scenes (region proposal). Instead of solely relying on 3D proposals, our method
leverages both mature 2D object detectors and advanced 3D deep learning for
object localization, achieving efficiency as well as high recall for even small
objects. Benefiting from learning directly on raw point clouds, our method is
also able to precisely estimate 3D bounding boxes even under strong occlusion
or with very sparse points. Evaluated on KITTI and SUN RGB-D 3D detection
benchmarks, our method outperforms the state of the art by remarkable margins
while having real-time capability.
Comment: 15 pages, 12 figures, 14 tables
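The "popping up" step that turns a 2D detection into a frustum point cloud is pinhole back-projection of the depth pixels inside the detected box. A minimal sketch, assuming standard intrinsics (fx, fy, cx, cy) and a metric depth map; the function name and interface are illustrative, and the subsequent 3D instance segmentation and amodal box estimation networks are not shown.

```python
import numpy as np

def frustum_points(depth, bbox, fx, fy, cx, cy):
    # Lift the pixels inside a 2D detection box to a 3D point cloud.
    # depth: HxW array of metric depths; bbox: (x0, y0, x1, y1).
    x0, y0, x1, y1 = bbox
    vs, us = np.mgrid[y0:y1, x0:x1]
    z = depth[y0:y1, x0:x1]
    valid = z > 0                   # drop missing depth readings
    z = z[valid]
    x = (us[valid] - cx) * z / fx   # back-project through intrinsics
    y = (vs[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)  # Nx3 points in camera frame
```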
Opt: A Domain Specific Language for Non-linear Least Squares Optimization in Graphics and Imaging
Many graphics and vision problems can be expressed as non-linear least
squares optimizations of objective functions over visual data, such as images
and meshes. The mathematical descriptions of these functions are extremely
concise, but their implementation in real code is tedious, especially when
optimized for real-time performance on modern GPUs in interactive applications.
In this work, we propose a new language, Opt (available under
http://optlang.org), for writing these objective functions over image- or
graph-structured unknowns concisely and at a high level. Our compiler
automatically transforms these specifications into state-of-the-art GPU solvers
based on Gauss-Newton or Levenberg-Marquardt methods. Opt can generate
different variations of the solver, so users can easily explore tradeoffs in
numerical precision, matrix-free methods, and solver approaches. In our
results, we implement a variety of real-world graphics and vision applications.
Their energy functions are expressible in tens of lines of code, from which
Opt produces highly optimized GPU solver implementations. These solvers have
performance competitive with the best published hand-tuned,
application-specific GPU solvers, and are orders of magnitude faster than a
general-purpose auto-generated solver.
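For reference, a single damped Gauss-Newton update of the kind such generated solvers iterate is shown below, in dense CPU form. This only illustrates the underlying math; Opt's actual output is matrix-free GPU code, and the signatures here are assumptions for the sketch.

```python
import numpy as np

def gauss_newton_step(residuals, jacobian, x, lam=0.0):
    # One update for the non-linear least-squares energy
    # E(x) = sum_i r_i(x)^2.
    r = residuals(x)   # residual vector r(x)
    J = jacobian(x)    # Jacobian dr/dx
    # lam = 0 gives a pure Gauss-Newton step; lam > 0 damps it.
    JTJ = J.T @ J + lam * np.eye(x.size)
    delta = np.linalg.solve(JTJ, -J.T @ r)
    return x + delta
```

Iterating this step until the update is small performs the solve; with lam > 0 each step becomes a Levenberg-Marquardt update, the other solver family the abstract names.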
Learning a Hierarchical Latent-Variable Model of 3D Shapes
We propose the Variational Shape Learner (VSL), a generative model that
learns the underlying structure of voxelized 3D shapes in an unsupervised
fashion. Through the use of skip-connections, our model can successfully learn
and infer a latent, hierarchical representation of objects. Furthermore,
realistic 3D objects can be easily generated by sampling the VSL's latent
probabilistic manifold. We show that our generative model can be trained
end-to-end from 2D images to perform single image 3D model retrieval.
Experiments show, both quantitatively and qualitatively, the improved
generalization of our proposed model over a range of tasks, performing better
or comparable to various state-of-the-art alternatives.
Comment: Accepted as oral presentation at the International Conference on 3D
Vision (3DV), 201
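One way to read the "latent, hierarchical representation" is as a global code that conditions a chain of local codes through skip-connections. The PyTorch sketch below is an assumed minimal realization of that idea; the layer widths, depth, and reparameterization details are illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HierarchicalLatent(nn.Module):
    # A global latent code plus a chain of local codes, each local
    # code conditioned on the global code and its predecessor.
    def __init__(self, feat_dim=256, global_dim=20, local_dim=10, levels=3):
        super().__init__()
        self.to_global = nn.Linear(feat_dim, global_dim * 2)  # mean, log-var
        self.to_local = nn.ModuleList(
            nn.Linear(global_dim + (local_dim if i > 0 else 0), local_dim * 2)
            for i in range(levels))

    @staticmethod
    def sample(stats):
        # Reparameterization trick: z = mu + sigma * eps.
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, features):
        z_global = self.sample(self.to_global(features))
        local_codes, prev = [], None
        for layer in self.to_local:
            inp = z_global if prev is None else torch.cat([z_global, prev], -1)
            prev = self.sample(layer(inp))
            local_codes.append(prev)
        return z_global, local_codes
```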
Calipso: Physics-based Image and Video Editing through CAD Model Proxies
We present Calipso, an interactive method for editing images and videos in a
physically-coherent manner. Our main idea is to realize physics-based
manipulations by running a full physics simulation on proxy geometries given by
non-rigidly aligned CAD models. Running these simulations allows us to apply
new, unseen forces to move or deform selected objects, change physical
parameters such as mass or elasticity, or even add entire new objects that
interact with the rest of the underlying scene. In Calipso, the user makes
edits directly in 3D; these edits are processed by the simulation and then
transferred to the target 2D content using shape-to-image correspondences in a
photo-realistic rendering process. To align the CAD models, we introduce an
efficient CAD-to-image alignment procedure that jointly optimizes rigid and
non-rigid alignment while preserving the high-level structure of the input
shape. Moreover, the user can choose to exploit image flow to estimate scene
motion, producing coherent physical behavior with ambient dynamics. We
demonstrate Calipso's physics-based editing on a wide range of examples
producing myriad physical behavior while preserving geometric and visual
consistency.
Comment: 11 pages
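The joint rigid/non-rigid alignment can be pictured as minimizing a small energy: a data term snapping CAD vertices onto image-derived correspondences, plus a regularizer that keeps mesh edge lengths near their rest values so the high-level structure of the shape survives. The sketch below is a toy stand-in under those assumptions, not the paper's exact objective.

```python
import numpy as np

def alignment_energy(verts, targets, edges, rest_lengths,
                     w_fit=1.0, w_smooth=0.5):
    # verts, targets: Nx3 arrays; edges: Ex2 vertex-index pairs;
    # rest_lengths: length-E array of undeformed edge lengths.
    fit = np.sum((verts - targets) ** 2)        # CAD-to-image data term
    i, j = edges[:, 0], edges[:, 1]
    lengths = np.linalg.norm(verts[i] - verts[j], axis=1)
    smooth = np.sum((lengths - rest_lengths) ** 2)  # structure prior
    return w_fit * fit + w_smooth * smooth
```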
GRASS: Generative Recursive Autoencoders for Shape Structures
We introduce a novel neural network architecture for encoding and synthesis
of 3D shapes, particularly their structures. Our key insight is that 3D shapes
are effectively characterized by their hierarchical organization of parts,
which reflects fundamental intra-shape relationships such as adjacency and
symmetry. We develop a recursive neural net (RvNN) based autoencoder to map a
flat, unlabeled, arbitrary part layout to a compact code. The code effectively
captures hierarchical structures of man-made 3D objects of varying structural
complexities despite being fixed-dimensional: an associated decoder maps a code
back to a full hierarchy. The learned bidirectional mapping is further tuned
using an adversarial setup to yield a generative model of plausible structures,
from which novel structures can be sampled. Finally, our structure synthesis
framework is augmented by a second trained module that produces fine-grained
part geometry, conditioned on global and local structural context, leading to a
full generative pipeline for 3D shapes. We demonstrate that without
supervision, our network learns meaningful structural hierarchies adhering to
perceptual grouping principles, produces compact codes which enable
applications such as shape classification and partial matching, and supports
shape synthesis and interpolation with significant variations in topology and
geometry.
Comment: Corresponding author: Kai Xu ([email protected])
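The recursive encode/decode step at the heart of such an RvNN autoencoder merges two fixed-size child part codes into one parent code and splits it back on the way down the hierarchy. A minimal PyTorch sketch follows; the single merge type and the dimensions are simplifications, since GRASS additionally handles symmetry nodes, node classification, and leaf part geometry.

```python
import torch
import torch.nn as nn

class RvNNCodec(nn.Module):
    # Merge two child codes into a parent code of the same size,
    # and decode a parent code back into two child codes.
    def __init__(self, code_dim=80):
        super().__init__()
        self.merge = nn.Sequential(nn.Linear(2 * code_dim, code_dim),
                                   nn.Tanh())
        self.split = nn.Sequential(nn.Linear(code_dim, 2 * code_dim),
                                   nn.Tanh())

    def encode(self, left, right):
        return self.merge(torch.cat([left, right], dim=-1))

    def decode(self, parent):
        return self.split(parent).chunk(2, dim=-1)
```

Applying encode bottom-up over a part hierarchy yields the fixed-dimensional root code the abstract describes; applying decode top-down reconstructs the hierarchy from a (possibly sampled) code.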