An intuitive control space for material appearance
Many different techniques for measuring material appearance have been
proposed in the last few years. These have produced large public datasets,
which have been used for accurate, data-driven appearance modeling. However,
although these datasets have allowed us to reach an unprecedented level of
realism in visual appearance, editing the captured data remains a challenge. In
this paper, we present an intuitive control space for predictable editing of
captured BRDF data, which allows for artistic creation of plausible novel
material appearances, bypassing the difficulty of acquiring novel samples. We
first synthesize novel materials, extending the existing MERL dataset up to 400
mathematically valid BRDFs. We then design a large-scale experiment, gathering
56,000 subjective ratings on the high-level perceptual attributes that best
describe our extended dataset of materials. Using these ratings, we build and
train networks of radial basis functions to act as functionals mapping the
perceptual attributes to an underlying PCA-based representation of BRDFs. We
show that our functionals are excellent predictors of the perceived attributes
of appearance. Our control space enables many applications, including intuitive
material editing of a wide range of visual properties, guidance for gamut
mapping, analysis of the correlation between perceptual attributes, or novel
appearance similarity metrics. Moreover, our methodology can be used to derive
functionals applicable to classic analytic BRDF representations. We release our
code and dataset publicly, in order to support and encourage further research
in this direction.
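The mapping described above, from subjective attribute ratings to a PCA-based BRDF representation, can be sketched with a small Gaussian radial-basis-function network. This is an illustrative toy, not the paper's implementation: the number of attributes, PCA coefficients, centers, and the bandwidth are made-up values.

```python
import numpy as np

def rbf_features(X, centers, gamma):
    """Gaussian RBF activations: phi_ij = exp(-gamma * ||x_i - c_j||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(200, 5))   # subjective attribute ratings (toy data)
Y_train = rng.normal(size=(200, 3))          # PCA coefficients of BRDFs (toy data)

centers = X_train[rng.choice(200, 30, replace=False)]  # RBF centers from the data
gamma = 2.0

Phi = rbf_features(X_train, centers, gamma)
# Ridge-regularized least squares for the output weights.
W = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(30), Phi.T @ Y_train)

def predict(x):
    """Map an attribute vector to predicted PCA coefficients."""
    return rbf_features(np.atleast_2d(x), centers, gamma) @ W

print(predict(np.full(5, 0.5)).shape)  # (1, 3)
```

Once trained on real ratings, such a functional can be evaluated at any point of the attribute space, which is what makes slider-style material editing possible.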
Transport-Based Neural Style Transfer for Smoke Simulations
Artistically controlling fluids has always been a challenging task.
Optimization techniques rely on approximating simulation states towards target
velocity or density field configurations, which are often handcrafted by
artists to indirectly control smoke dynamics. Patch synthesis techniques
transfer image textures or simulation features to a target flow field. However,
these are either limited to adding structural patterns or augmenting coarse
flows with turbulent structures, and hence cannot capture the full spectrum of
different styles and semantically complex structures. In this paper, we propose
the first Transport-based Neural Style Transfer (TNST) algorithm for volumetric
smoke data. Our method is able to transfer features from natural images to
smoke simulations, enabling general content-aware manipulations ranging from
simple patterns to intricate motifs. The proposed algorithm is physically
inspired, since it computes the density transport from a source input smoke to
a desired target configuration. Our transport-based approach allows direct
control over the divergence of the stylization velocity field by optimizing
incompressible and irrotational potentials that transport smoke towards
stylization. Temporal consistency is ensured by transporting and aligning
subsequent stylized velocities, and 3D reconstructions are computed by
seamlessly merging stylizations from different camera viewpoints.
Comment: ACM Transactions on Graphics (SIGGRAPH ASIA 2019), additional
materials: http://www.byungsoo.me/project/neural-flow-styl
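The decomposition into incompressible and irrotational potentials mentioned above can be illustrated on a 2D grid. This is a generic finite-difference sketch, not the paper's optimizer: the velocity is written as grad(phi) plus the curl of a scalar stream function psi, so the divergence of the field is controlled entirely by phi.

```python
import numpy as np

def velocity_from_potentials(phi, psi, h=1.0):
    """v = grad(phi) + curl(psi) on a 2D grid with spacing h.
    In 2D, curl of a scalar psi is (d psi/dy, -d psi/dx)."""
    gpy, gpx = np.gradient(phi, h)   # np.gradient returns (d/dy, d/dx) here
    spy, spx = np.gradient(psi, h)
    vx = gpx + spy                   # curl(psi)_x =  d psi / dy
    vy = gpy - spx                   # curl(psi)_y = -d psi / dx
    return vx, vy

# With phi = 0, the field is divergence-free up to floating-point error,
# because the mixed finite-difference partials of psi commute.
n = 64
y, x = np.mgrid[0:n, 0:n] / n
psi = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
vx, vy = velocity_from_potentials(np.zeros((n, n)), psi)
div = np.gradient(vx, axis=1) + np.gradient(vy, axis=0)
print(abs(div).max())
```

Optimizing phi and psi separately is what gives the method direct control over how much the stylization velocity compresses or expands the smoke.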
DeLight-Net: Decomposing Reflectance Maps into Specular Materials and Natural Illumination
In this paper we are extracting surface reflectance and natural environmental
illumination from a reflectance map, i.e. from a single 2D image of a sphere of
one material under one illumination. This is a notoriously difficult problem,
yet key to various re-rendering applications. With the recent advances in
estimating reflectance maps from 2D images their further decomposition has
become increasingly relevant.
To this end, we propose a Convolutional Neural Network (CNN) architecture to
reconstruct both material parameters (i.e. Phong) as well as illumination (i.e.
high-resolution spherical illumination maps), that is solely trained on
synthetic data. We demonstrate the decomposition of synthetic as well as real
photographs of reflectance maps, both in High Dynamic Range (HDR) and, for the
first time, in Low Dynamic Range (LDR). Results are compared to
previous approaches quantitatively as well as qualitatively in terms of
re-renderings where illumination, material, view or shape are changed.
Comment: Stamatios Georgoulis and Konstantinos Rematas contributed equally to
this work
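The Phong material parameters the network recovers feed a standard reflectance model. The following is a textbook Phong evaluation for a single directional light, included only to make the parameterization concrete; the coefficient names (kd, ks, shininess) are the conventional ones, not taken from the paper.

```python
import numpy as np

def phong_shade(n, l, v, kd, ks, shininess):
    """Phong reflectance for unit normal n, light direction l, view direction v."""
    n, l, v = (u / np.linalg.norm(u) for u in (n, l, v))
    diffuse = kd * max(np.dot(n, l), 0.0)
    r = 2.0 * np.dot(n, l) * n - l            # mirror reflection of l about n
    specular = ks * max(np.dot(r, v), 0.0) ** shininess
    return diffuse + specular

# Head-on light and view along the normal: diffuse term = kd, specular = ks.
val = phong_shade(np.array([0, 0, 1.0]), np.array([0, 0, 1.0]),
                  np.array([0, 0, 1.0]), kd=0.6, ks=0.3, shininess=32)
print(val)  # 0.9
```

Integrating this model against a recovered high-resolution illumination map, direction by direction, is what produces the re-renderings the abstract compares.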
From Multiview Image Curves to 3D Drawings
Reconstructing 3D scenes from multiple views has made impressive strides in
recent years, chiefly by correlating isolated feature points, intensity
patterns, or curvilinear structures. In the general setting - without
controlled acquisition, abundant texture, curves and surfaces following
specific models or limiting scene complexity - most methods produce unorganized
point clouds, meshes, or voxel representations, with some exceptions producing
unorganized clouds of 3D curve fragments. Ideally, many applications require
structured representations of curves, surfaces and their spatial relationships.
This paper presents a step in this direction by formulating an approach that
combines 2D image curves into a collection of 3D curves, with topological
connectivity between them represented as a 3D graph. This results in a 3D
drawing, which is complementary to surface representations in the same sense as
a 3D scaffold complements a tent taut over it. We evaluate our results against
ground truth on synthetic and real datasets.
Comment: Expanded ECCV 2016 version with tweaked figures and including an
overview of the supplementary material available at
multiview-3d-drawing.sourceforge.ne
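The "3D drawing" output described above, curves plus their topological connectivity, amounts to a graph whose edges carry curve geometry and whose nodes are junctions. A minimal sketch of such a structure, with hypothetical names not taken from the paper:

```python
import numpy as np

class Drawing3D:
    """Toy 3D-drawing graph: junction nodes joined by 3D polyline curves."""

    def __init__(self):
        self.junctions = {}   # junction id -> 3D point
        self.curves = []      # list of (junction_a, junction_b, polyline)

    def add_junction(self, jid, point):
        self.junctions[jid] = np.asarray(point, float)

    def add_curve(self, a, b, polyline):
        """Connect junctions a and b with a 3D polyline (N x 3 array)."""
        self.curves.append((a, b, np.asarray(polyline, float)))

    def degree(self, jid):
        """Number of curve endpoints meeting at a junction."""
        return sum((a == jid) + (b == jid) for a, b, _ in self.curves)

d = Drawing3D()
d.add_junction(0, [0, 0, 0])
d.add_junction(1, [1, 0, 0])
d.add_junction(2, [0, 1, 0])
d.add_curve(0, 1, [[0, 0, 0], [0.5, 0.1, 0], [1, 0, 0]])
d.add_curve(0, 2, [[0, 0, 0], [0, 1, 0]])
print(d.degree(0))  # 2
```

The point of the representation is exactly this connectivity query: unlike an unorganized cloud of curve fragments, the graph records which curves meet and where.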
MoSculp: Interactive Visualization of Shape and Time
We present a system that allows users to visualize complex human motion via
3D motion sculptures---a representation that conveys the 3D structure swept by
a human body as it moves through space. Given an input video, our system
computes the motion sculpture and provides a user interface for rendering it
in different styles, including the options to insert the sculpture back into
the original video, render it in a synthetic scene or physically print it.
To provide this end-to-end workflow, we introduce an algorithm that estimates
the human's 3D geometry over time from a set of 2D images and develop a
3D-aware image-based rendering approach that embeds the sculpture back into the
scene. By automating the process, our system takes motion sculpture creation
out of the realm of professional artists, and makes it applicable to a wide
range of existing video material.
By providing viewers with 3D information, motion sculptures reveal space-time
motion information that is difficult to perceive with the naked eye, and allow
viewers to interpret how different parts of the object interact over time. We
validate the effectiveness of this approach with user studies, finding that our
motion sculpture visualizations are significantly more informative about motion
than existing stroboscopic and space-time visualization methods.
Comment: UIST 2018. Project page: http://mosculp.csail.mit.edu
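The "structure swept by a body as it moves" can be pictured as the union of the shape's occupancy over time. This toy voxel sketch (a translating sphere standing in for the body; none of it is the paper's pipeline) accumulates that swept space-time volume:

```python
import numpy as np

# Occupancy grid: True wherever the moving shape has ever been.
n, steps = 32, 8
grid = np.zeros((n, n, n), dtype=bool)
zz, yy, xx = np.mgrid[0:n, 0:n, 0:n]

for t in range(steps):
    cx = 6 + t * 2.5                      # sphere center moves along x over time
    sphere = (xx - cx) ** 2 + (yy - 16) ** 2 + (zz - 16) ** 2 <= 4 ** 2
    grid |= sphere                        # accumulate the swept volume

print(grid.sum())                         # voxels covered by the sweep
```

A real motion sculpture replaces the sphere with per-frame estimates of the body's 3D geometry, which is why the system first recovers that geometry from the video.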
Interferometric Evidence for Resolved Warm Dust in the DQ Tau System
We report on near-infrared (IR) interferometric observations of the
double-lined pre-main sequence (PMS) binary system DQ Tau. We model these data
with a visual orbit for DQ Tau supported by the spectroscopic orbit & analysis
of \citet{Mathieu1997}. Further, DQ Tau exhibits significant near-IR excess;
modeling our data requires inclusion of near-IR light from an 'excess' source.
Remarkably, the excess source is resolved in our data, similar in scale to the
binary itself (~0.2 AU at apastron), rather than the larger circumbinary
disk (~0.4 AU radius). Our observations support the \citet{Mathieu1997}
and \citet{Carr2001} inference of significant warm material near the DQ Tau
binary.
Comment: 14 pgs, 3 figures, ApJL in press